
    Learning from life-logging data by hybrid HMM: a case study on active states prediction

    In this paper, we propose a hybrid classifier-hidden Markov model (HMM) as a supervised learning approach to recognizing daily active states from sequential life-logging data collected from wearable sensors. To cope with noise and incompleteness, we generate synthetic training data from a real dataset and, in conjunction with the HMM, propose a multiobjective genetic programming (MOGP) classifier, comparing it against support vector machines (SVMs) with different kernels. We demonstrate that the system works effectively with either algorithm to recognize personal active states with respect to medical references. We also show that MOGP generally yields better results than SVM without requiring an ad hoc kernel.
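
    The hybrid design pairs a per-frame classifier with HMM temporal smoothing. Below is a minimal sketch of that decoding step, assuming the classifier (MOGP or SVM) emits per-frame state posteriors; the three-state setup, sticky transition matrix, and random posteriors are illustrative, not the paper's parameters.

```python
# Hypothetical hybrid classifier-HMM decoding: Viterbi over classifier posteriors.
import numpy as np

def viterbi(log_emission, log_trans, log_prior):
    """Most likely state sequence given per-frame log P(state | frame)."""
    T, S = log_emission.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_prior + log_emission[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emission[t]
    path = np.zeros(T, dtype=int)
    path[-1] = dp[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

rng = np.random.default_rng(0)
posteriors = rng.dirichlet(np.ones(3), size=100)   # stand-in for MOGP/SVM outputs
trans = np.full((3, 3), 0.05) + np.eye(3) * 0.85   # sticky transitions favour temporal consistency
states = viterbi(np.log(posteriors), np.log(trans), np.log(np.ones(3) / 3))
```

    The sticky diagonal of the transition matrix is what suppresses frame-by-frame flicker in the predicted active states.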

    Automatic Graph Cut Segmentation of Lesions in CT Using Mean Shift Superpixels

    This paper presents a new, automatic method for accurately extracting lesions from CT data. It first computes, at each voxel, a five-dimensional (5D) feature vector containing intensity, shape index, and 3D spatial location. Nonparametric mean shift clustering then forms superpixels from these 5D features, resulting in an oversegmentation of the image. Finally, a graph cut algorithm groups the superpixels using a novel energy formulation that incorporates shape, intensity, and spatial features. The mean shift superpixels increase the robustness of the result while reducing computation time. We assume that the lesion is partly spherical, producing high shape index values in part of the lesion; from these spherical subregions, foreground and background seeds for the graph cut segmentation are obtained automatically. The proposed method has been evaluated on a clinical CT dataset. Visual inspection of different lesion types (lung nodules and colonic polyps), as well as a quantitative evaluation on 101 solid and 80 ground-glass opacity (GGO) nodules, demonstrates the potential of the proposed method. The joint spatial-intensity-shape features provide a powerful cue for successfully segmenting lesions adjacent to structures of similar intensity but different shape, as well as lesions exhibiting the partial volume effect.
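
    A sketch of the superpixel step only, assuming scikit-learn's mean shift: each voxel contributes a 5D feature (intensity, shape index, x, y, z) and clusters become superpixels. The synthetic volume, the stand-in shape index, and the crude scaling are assumptions; the follow-up graph cut is omitted.

```python
# Illustrative 5D mean-shift superpixels on a tiny synthetic CT block.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
shape = (8, 8, 8)
intensity = rng.normal(size=shape)
shape_index = rng.uniform(-1, 1, size=shape)   # stand-in for the curvature-based shape index
z, y, x = np.indices(shape)

features = np.stack(
    [intensity.ravel(), shape_index.ravel(), x.ravel(), y.ravel(), z.ravel()],
    axis=1,
).astype(float)
features /= features.std(axis=0)               # crude per-dimension scaling

labels = MeanShift(bandwidth=2.0).fit_predict(features)
print("superpixels:", labels.max() + 1)        # each cluster is one superpixel
```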

    Automatic grade classification of Barrett's Esophagus through feature enhancement

    Barrett's Esophagus (BE) is a precancerous condition that affects the esophageal tube and carries the risk of developing into esophageal adenocarcinoma. BE is the process in which metaplastic intestinal epithelium develops and replaces the normal cells of the esophageal lining. Detecting BE is considered difficult due to its appearance and properties, and diagnosis is usually made through both endoscopy and biopsy. Recently, Computer Aided Diagnosis systems have been developed to support physicians' opinions when detection or classification is difficult across different types of diseases. In this paper, an automatic classification of the Barrett's Esophagus condition is introduced. The presented method enhances the internal features of a Confocal Laser Endomicroscopy (CLE) image using a proposed enhancement filter. This filter relies on fractional differentiation and integration to improve the features in the discrete wavelet transform of an image. Various features are then extracted from each enhanced image at different levels for the multi-classification process. Our approach is validated on a dataset of 32 patients with 262 images of different histology grades. The experimental results demonstrate the efficiency of the proposed technique. Our method helps clinicians classify more accurately, which can potentially reduce the number of biopsies needed for diagnosis, facilitate regular monitoring of the treatment and development of the patient's case, and help train doctors on the new endoscopy technology. Accurate automatic classification is particularly important for the Intestinal Metaplasia (IM) type, which can progress into deadly cancer. Hence, this work contributes an automatic classification that facilitates early intervention and treatment while decreasing the number of biopsy samples needed.
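
    A minimal sketch of wavelet-domain enhancement, assuming PyWavelets: detail coefficients are amplified with a level-dependent gain before reconstruction. The gain schedule below is a crude stand-in for the paper's fractional differ-integral filter, whose exact transfer function the abstract does not give; the synthetic image and parameters are assumptions.

```python
# Hypothetical DWT-domain feature enhancement for a CLE frame.
import numpy as np
import pywt

def enhance(image, alpha=0.3, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]         # coarsest approximation + detail bands
    boosted = [
        tuple((1.0 + alpha) ** (levels - i) * band for band in level_bands)
        for i, level_bands in enumerate(details)    # stronger gain at coarser levels
    ]
    return pywt.waverec2([approx] + boosted, wavelet)

img = np.random.default_rng(0).random((128, 128))   # synthetic stand-in for a CLE image
sharpened = enhance(img)
```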

    ResDUnet: Residual Dilated UNet for Left Ventricle Segmentation from Echocardiographic Images

    Echocardiography is the modality of choice for assessing left ventricle function. The left ventricle is responsible for pumping oxygen-rich blood to all parts of the body. Segmenting this chamber from echocardiographic images is a challenging task due to its ambiguous boundary and inhomogeneous intensity distribution. In this paper we propose a novel deep learning model named ResDUnet. The model is based on U-net combined with dilated convolution, where residual blocks are employed instead of the basic U-net units to ease the training process. Each block is enriched with a squeeze-and-excitation unit for channel-wise attention and adaptive feature re-calibration. To tackle the variability of left ventricle shape and size, we enrich the feature concatenation in U-net by integrating feature maps generated by cascaded dilation. Cascaded dilation broadens the receptive field relative to traditional convolution, which allows the generation of multi-scale information and, in turn, a more robust segmentation. Performance was evaluated on a publicly available dataset of 500 patients with large variability in image quality and patient pathology. The proposed model shows a Dice similarity increase of 8.4% compared to DeepLabv3 and 1.2% compared to the basic U-net architecture. The experimental results demonstrate its potential for use in the clinical domain.
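
    A hedged PyTorch sketch of the two ingredients named in the abstract: a residual block gated by squeeze-and-excitation, and a cascaded-dilation branch. Channel counts, the reduction ratio, and the dilation rates (1, 2, 4) are illustrative guesses, not ResDUnet's published configuration.

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    """Residual conv block with a squeeze-and-excitation channel gate."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.body(x)
        y = y * self.se(y)          # channel-wise re-calibration
        return self.act(x + y)      # residual connection eases training

class CascadedDilation(nn.Module):
    """Stacked dilated convs widen the receptive field for multi-scale features."""
    def __init__(self, ch):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )

    def forward(self, x):
        for conv in self.convs:
            x = torch.relu(conv(x))
        return x

feat = torch.randn(1, 32, 64, 64)
out = CascadedDilation(32)(SEResBlock(32)(feat))   # same spatial size, wider context
```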

    Regression analysis for paths inference in a novel Proton CT system

    In this work, we analyse proton path inference for the construction of CT imagery in a new proton CT system that can record multiple proton paths and residual energies. From the recorded tracks of multiple protons, each individual proton path is inferred. The inferred proton paths can then be used for residual-energy detection and CT image construction for analysing a specific tissue. Different regression methods (linear regression and Gaussian process regression models) are exploited for the path inference of each proton. Studies on a dataset of recorded proton trajectories show that the Gaussian process regression method achieves better path-inference accuracy, in terms of both path-assignment accuracy and root mean square error (RMSE).
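
    A toy sketch of the Gaussian-process variant using scikit-learn: lateral displacement is regressed against depth for one proton, giving a smooth path with uncertainty. The kernel choice and the synthetic trajectory are assumptions, not the paper's setup.

```python
# Hypothetical single-proton path inference with GP regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
depth = np.linspace(0.0, 10.0, 12)[:, None]                            # detector planes (cm)
lateral = 0.05 * depth.ravel() ** 2 + rng.normal(scale=0.05, size=12)  # noisy recorded hits

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.01))
gpr.fit(depth, lateral)

fine = np.linspace(0.0, 10.0, 200)[:, None]
mean, std = gpr.predict(fine, return_std=True)   # inferred path plus uncertainty band
```

    The posterior standard deviation is what a linear fit cannot provide, and it is one plausible reason the GP performs better on path assignment.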

    Rethinking the transfer learning for FCN based polyp segmentation in colonoscopy

    Beyond the complex nature of colonoscopy frames, with intrinsic frame-formation artefacts such as light reflections and the diversity of polyp types and shapes, the publicly available polyp segmentation training datasets are limited, small, and imbalanced. As a result, automated polyp segmentation with a deep neural network remains an open challenge, owing to overfitting when training on small datasets. We propose a simple yet effective polyp segmentation pipeline that couples the segmentation (FCN) and classification (CNN) tasks. We find that interactive weight transfer between the dense and coarse vision tasks mitigates overfitting during learning, which motivates a new training scheme within our segmentation pipeline. Our method is evaluated on the CVC-EndoSceneStill and Kvasir-SEG datasets, achieving Polyp-IoU improvements of 4.34% and 5.70% over the state-of-the-art methods on EndoSceneStill and Kvasir-SEG, respectively.
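
    A minimal sketch of the weight-transfer idea, assuming a shared encoder between the two tasks: the classifier's encoder weights are copied into the segmentation encoder via matching state dicts. The tiny backbone and module layout are hypothetical; the paper's actual interactive training schedule is not reproduced here.

```python
# Hypothetical encoder weight transfer between classification and segmentation.
import torch.nn as nn

backbone = nn.Sequential(                       # shared encoder trained on classification
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
classifier = nn.Sequential(backbone, nn.AdaptiveAvgPool2d(1),
                           nn.Flatten(), nn.Linear(32, 2))

segmenter_encoder = nn.Sequential(              # structurally identical FCN encoder
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

# Transfer the coarse-task weights into the dense task; in the paper's spirit,
# training would then alternate between tasks with transfers in both directions.
segmenter_encoder.load_state_dict(backbone.state_dict())
```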

    Learning spatiotemporal features for esophageal abnormality detection from endoscopic videos

    Esophageal cancer is categorized as a disease with a high mortality rate. Early detection of esophageal abnormalities (i.e. precancerous and early cancerous lesions) can improve patient survival rates. Recent deep learning-based methods have been proposed to detect selected types of esophageal abnormality from endoscopic images. However, no methods in the literature cover detection from endoscopic videos, detection in challenging frames, or detection of more than one type of esophageal abnormality. In this paper, we present an efficient method to automatically detect different types of esophageal abnormalities from endoscopic videos. We propose a novel 3D Sequential DenseConvLstm network that extracts spatiotemporal features from the input video. Our network combines a 3D Convolutional Neural Network (3DCNN) and Convolutional Lstm (ConvLstm) to efficiently learn short- and long-term spatiotemporal features. The generated feature map is used by a region proposal network and an ROI pooling layer to produce bounding boxes that localize abnormality regions in each frame throughout the video. Finally, we investigate a post-processing method named Frame Search Conditional Random Field (FS-CRF), which improves the overall performance of the model by recovering missing regions in neighbouring frames within the same clip. We extensively validate our model on an endoscopic video dataset that includes a variety of esophageal abnormalities. Our model achieved high performance on different evaluation metrics: 93.7% recall, 92.7% precision, and 93.2% F-measure. Moreover, as no results have been reported in the literature for esophageal abnormality detection from endoscopic videos, we tested the model's robustness on a publicly available colonoscopy video dataset, achieving polyp detection performance of 81.18% recall, 96.45% precision, and 88.16% F-measure, compared to state-of-the-art results of 78.84% recall, 90.51% precision, and 84.27% F-measure on the same dataset. This demonstrates that the proposed method can be adapted to different gastrointestinal endoscopic video applications with promising performance.
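
    To make the ConvLstm component concrete, here is a compact cell in the standard formulation (not the paper's exact 3D Sequential DenseConvLstm): the LSTM gates are computed with convolutions so the hidden state keeps its spatial layout across video frames. Channel sizes and the random input are illustrative.

```python
# Standard ConvLSTM cell sketch; gates are convolutional, state stays spatial.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g           # long-term memory update
        h = o * torch.tanh(c)       # short-term spatial hidden state
        return h, c

cell = ConvLSTMCell(8, 16)
h = c = torch.zeros(1, 16, 32, 32)
for frame_feat in torch.randn(5, 1, 8, 32, 32):   # e.g. 3DCNN features per time step
    h, c = cell(frame_feat, (h, c))
```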

    Automatic image quality assessment and measurement of fetal head in two-dimensional ultrasound image

    Owing to the inconsistent image quality of routine obstetric ultrasound (US) scans, which leads to large intraobserver and interobserver variability, the aim of this study is to develop a quality-assured, fully automated US fetal head measurement system. A texton-based fetal head segmentation is used as a prerequisite step to obtain the head region. Textons are calculated using a filter bank designed specifically for the US fetal head structure. Both shape-based and anatomic-based features calculated from the segmented head region are then fed into a random forest classifier to determine the quality of the image (e.g., whether the image is acquired from a correct imaging plane), from which fetal head measurements [biparietal diameter (BPD), occipital-frontal diameter (OFD), and head circumference (HC)] are derived. The experimental results show good performance of our method for US quality assessment and fetal head measurement. The overall precision for automatic image quality assessment is 95.24%, with 87.5% sensitivity and 100% specificity, while the segmentation performance shows 99.27% (±0.26) accuracy, 97.07% (±2.3) sensitivity, 2.23 mm (±0.74) maximum symmetric contour distance, and 0.84 mm (±0.28) average symmetric contour distance. Statistical analysis using a paired t-test and Bland–Altman plots indicates that the 95% limits of agreement for interobserver variability between the automated measurements and the senior expert's measurements are 2.7 mm for BPD, 5.8 mm for OFD, and 10.4 mm for HC, with mean differences of −0.038 ± 1.38 mm, −0.20 ± 2.98 mm, and −0.72 ± 5.36 mm, respectively. These narrow 95% limits of agreement indicate a good level of consistency between the automated and the senior expert's measurements.
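
    A sketch of a generic texton pipeline of the kind the abstract describes: filter-bank responses are clustered into "textons", each image becomes a texton histogram, and a random forest labels image quality. The Gaussian-derivative bank stands in for the paper's US-specific filter bank, and the synthetic frames and labels are assumptions.

```python
# Hypothetical texton features + random forest quality classification.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def filter_bank(img):
    responses = [ndimage.gaussian_filter(img, s) for s in (1, 2, 4)]
    responses += [ndimage.gaussian_laplace(img, s) for s in (1, 2)]
    return np.stack([r.ravel() for r in responses], axis=1)   # (pixels, filters)

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(6)]             # synthetic US frames
labels = [0, 1, 0, 1, 0, 1]                                   # correct / incorrect plane

# Cluster all pixel responses into 8 textons, then describe each image
# by its texton-occurrence histogram.
km = KMeans(n_clusters=8, n_init=10).fit(np.vstack([filter_bank(im) for im in images]))
hists = [np.bincount(km.predict(filter_bank(im)), minlength=8) for im in images]

clf = RandomForestClassifier(n_estimators=50).fit(hists, labels)
```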

    MIXR: A Standard Architecture for Medical Image Analysis in Augmented and Mixed Reality

    Medical image analysis is evolving into a new dimension, combining the power of AI and machine learning with real-time, real-space displays: Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR), known collectively as Extended Reality (XR). These devices, typically available as head-mounted displays, are enabling a complete transformation of how medical data is viewed, processed and analysed in clinical practice. There have been recent attempts to apply XR devices to surgical planning and the training of medics. However, the radiological front, spanning detection, diagnostics and prognosis, remains unexplored. In this paper we propose a standard framework, or architecture, called Medical Imaging in Extended Reality (MIXR) for building medical image analysis applications in XR. MIXR consists of several components used in the literature, tied together to reconstruct volume data in 3D space. Our focus here is on the reconstruction mechanism for CT and MRI data in XR; nevertheless, the framework we propose has applications beyond these modalities.
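
    The abstract does not detail MIXR's reconstruction mechanism, so here is one plausible step only, as a hedged sketch: extracting an iso-surface mesh from a CT volume with marching cubes, the kind of geometry an XR engine can render. The spherical phantom and iso-level are assumptions.

```python
# Illustrative iso-surface extraction for an XR scene (not MIXR's actual pipeline).
import numpy as np
from skimage import measure

z, y, x = np.indices((64, 64, 64))
volume = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(float)

verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)   # vertex/triangle arrays, exportable as a 3D mesh
```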

    Esophageal Abnormality Detection Using DenseNet Based Faster R-CNN With Gabor Features

    Early detection of esophageal abnormalities can help prevent the progression of the disease into later stages. During esophageal examination, abnormalities are often overlooked because of their irregular shapes, variable sizes, and complex surrounding areas, which demand significant effort and experience. In this paper, a novel deep learning model based on the faster region-based convolutional neural network (Faster R-CNN) is presented to automatically detect abnormalities in the esophagus from endoscopic images. The proposed detection system combines handcrafted Gabor features with CNN features. The densely connected convolutional network (DenseNet) architecture is adopted to extract the CNN features, strengthening feature propagation between layers and alleviating the vanishing gradient problem. To address the challenge of detecting abnormal, complex regions, we fuse the extracted Gabor features with the CNN features through concatenation, enhancing texture details in the detection stage. Our newly designed architecture is validated on two datasets (Kvasir and MICCAI 2015). On Kvasir, the results show an outstanding performance, with a recall of 90.2%, a precision of 92.1%, and a mean average precision (mAP) of 75.9%. On the MICCAI 2015 dataset, the model surpasses the state-of-the-art performance with 95% recall, 91% precision, and an mAP of 84%. The experimental results demonstrate that the system can detect abnormalities in endoscopic images with good performance and without any human intervention.
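
    A minimal sketch of the fusion idea, assuming a fixed Gabor filter bank whose responses are concatenated channel-wise with CNN feature maps. The four orientations, the frequency, and the tiny stand-in CNN are illustrative; in the paper the CNN branch is a DenseNet, and in practice the Gabor maps would be resampled to match the CNN feature resolution.

```python
# Hypothetical Gabor + CNN feature fusion by channel concatenation.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from skimage.filters import gabor_kernel

kernels = [np.real(gabor_kernel(frequency=0.2, theta=t))
           for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]

img = torch.rand(1, 1, 64, 64)                     # grayscale endoscopic frame
gabor_maps = [
    F.conv2d(img, torch.tensor(k, dtype=torch.float32)[None, None],
             padding=(k.shape[0] // 2, k.shape[1] // 2))   # odd kernels -> same size
    for k in kernels
]
gabor_feat = torch.cat(gabor_maps, dim=1)          # (1, 4, 64, 64) texture channels

cnn = nn.Conv2d(1, 12, 3, padding=1)               # stand-in for DenseNet features
fused = torch.cat([cnn(img), gabor_feat], dim=1)   # (1, 16, 64, 64) fused tensor
```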